Search results

1 – 10 of 227
Article
Publication date: 12 December 2022

Afshin Yaghoubi and Seyed Taghi Akhavan Niaki

Abstract

Purpose

One of the common approaches to improving system reliability is standby redundancy. Although many works on the applications of standby redundancy are available in the literature, they assume the system components to be independent of each other. In reality, however, the components can be dependent on one another, so the failure of each component affects the failure rate of the remaining active components. In this paper, a standby two-unit system is considered, assuming a dependency between the switch and its associated active component.

Design/methodology/approach

This paper assumes that the failures of the switch and its associated active component follow the Marshall–Olkin bivariate exponential distribution. The reliability analysis of the system is then carried out using the continuous-time Markov chain method.
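
The Marshall–Olkin construction can be sketched directly: two dependent failure times arise as the minima of independent component-specific shocks and a common shock. The sketch below is an illustration, not the authors' code, and the rates `lam1`, `lam2`, `lam12` are arbitrary choices:

```python
import numpy as np

def marshall_olkin_exponential(lam1, lam2, lam12, size, rng):
    """Draw (X, Y) from the Marshall-Olkin bivariate exponential:
    X = min(Z1, Z12), Y = min(Z2, Z12), where Z1 ~ Exp(lam1),
    Z2 ~ Exp(lam2) and Z12 ~ Exp(lam12) are independent shocks.
    The common shock Z12 makes the two failure times dependent."""
    z1 = rng.exponential(1.0 / lam1, size)
    z2 = rng.exponential(1.0 / lam2, size)
    z12 = rng.exponential(1.0 / lam12, size)
    return np.minimum(z1, z12), np.minimum(z2, z12)

rng = np.random.default_rng(0)
x, y = marshall_olkin_exponential(0.5, 0.8, 0.3, 200_000, rng)
# Marginally X ~ Exp(lam1 + lam12), so E[X] = 1 / 0.8 = 1.25
print(x.mean(), np.corrcoef(x, y)[0, 1])
```

The positive sample correlation reflects the common shock, which is exactly the switch-component dependence the paper models.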

Findings

The application of the derived equations to determine the system's steady-state availability and reliability, together with a sensitivity analysis on the mean time to failure, is demonstrated through a numerical illustration.

Originality/value

All previous models of the standby redundancy approach assumed independence between the switch and the associated active unit. In this paper, the switch and its associated component are assumed to be dependent on each other.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 6
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 5 March 2018

Hamidreza Izadbakhsh, Rassoul Noorossana and Seyed Taghi Akhavan Niaki

Abstract

Purpose

The purpose of this paper is to apply a Poisson generalized linear model (PGLM) with a log link, instead of multinomial logistic regression, to monitor multinomial logistic profiles in Phase I. Hence, estimating the coefficients becomes easier and more accurate.
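
As a rough illustration of this modeling choice, a Poisson GLM with log link can be fitted by iteratively reweighted least squares (Fisher scoring). The sketch below is a minimal NumPy implementation on simulated count data; it is not the authors' Phase I monitoring scheme, and the design matrix and coefficients are invented for the example:

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Fit a Poisson GLM with log link, E[y] = exp(X @ beta),
    by iteratively reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # current mean estimate
        W = mu                           # Poisson working weights
        z = X @ beta + (y - mu) / mu     # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

# Simulated counts with known coefficients
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(5000), rng.normal(size=5000)])
true_beta = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = fit_poisson_glm(X, y)
print(beta_hat)  # close to [0.5, 0.3]
```

The log link keeps the fitted means positive, and each IRLS step is an ordinary weighted least-squares solve, which is what makes coefficient estimation straightforward.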

Design/methodology/approach

A simulation technique is used to assess the performance of the proposed algorithm using four different control charts for monitoring.

Findings

The proposed algorithm is faster and more accurate than the previous algorithms. Simulation results also indicate that the likelihood ratio test method is able to detect out-of-control parameters more efficiently.

Originality/value

The PGLM with log link has not been used to monitor multinomial profiles in Phase I.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 6 August 2018

Amir Hossein Niknamfar, Seyed Armin Akhavan Niaki and Marziyeh Karimi

Abstract

Purpose

The purpose of this study is to develop a novel and practical series-parallel inventory-redundancy allocation system in a green supply chain comprising a single manufacturer and multiple retailers operating in several positions without any conflict of interest. The manufacturer first produces multiple products and then dispatches them to the retailers at different wholesale prices based on a common replenishment cycle policy, while the retailers sell the purchased products to customers at different retail prices. In this way, the manufacturer faces a redundancy allocation problem (RAP) whose solution enhances the reliability of the production system. Furthermore, to address global warming and human health concerns, this paper considers both the tax cost of industrial greenhouse gas (GHG) emissions of all produced products and a limit on total GHG emissions.

Design/methodology/approach

The manufacturer intends not only to maximize the total net profit but also to minimize the mean time to failure of his production system using a RAP. To achieve these objectives, the max-min approach associated with the solution method known as the interior point method is utilized to maximize the minimum (the worst) value of the objective functions. Finally, numerical experiments are presented to further demonstrate the applicability of the proposed methodology. Sensitivity analysis on the green supply chain approach is also performed to obtain more insight.

Findings

The computational results showed that increasing the number of products and retailers can lead to a substantial increase in the total net profit, indicating that the manufacturer would benefit considerably from adding a new retailer to the green supply chain. Moreover, an increase in the number of machines yields a significant improvement in the reliability of the production system. Furthermore, the sensitivity analysis on the green approach indicated that increasing the number of machines has a substantial impact on both the total net profit and the total tax cost. In addition, the proposed green supply chain was not only more efficient than its non-green counterpart but also more sensitive to the tax cost of GHG emissions than to the number of machines.

Originality/value

In summary, the motivations are as follows: the development of a bi-objective series-parallel inventory-RAP in a green supply chain; the proposal of a hybrid inventory-RAP; and the consideration of the interior point solution method. The novelty stems from both theoretical and experimental techniques, and the paper has industrial applications. The advantage of the proposed approach is that it generates additional opportunities and cost-effectiveness for businesses and companies operating a green supply chain under an inventory model.

Article
Publication date: 12 February 2018

Seyed Hamid Reza Pasandideh, Seyed Taghi Akhavan Niaki and Pejman Ahmadi

Abstract

Purpose

In this paper, the joint replenishment problem is modeled for a two-level supply chain consisting of a single supplier and multiple retailers that use the vendor-managed inventory (VMI) policy for several products. This paper aims to find the optimal number of products to order in both policies, the optimal times at which each retailer orders the products in the traditional policy and the optimal times at which the supplier orders the product in the VMI policy.

Design/methodology/approach

The problem is first formulated as a constrained integer nonlinear programming model; then, it is solved using a teaching–learning-based optimization algorithm. As no benchmarks are available in the literature, a genetic algorithm is used as well to validate the results obtained.

Findings

The solutions obtained using both algorithms for several numerical examples are compared to those of a random search procedure for further validation. A real case is solved at the end to demonstrate the applicability of the proposed methodology and to compare both policies.

Research limitations/implications

The paper does not have any special limitations.

Practical implications

The study has significant practical implications for sellers and suppliers seeking to maximize profit. In addition, satisfying the constraints makes the decision-making more complicated.

Originality/value

This paper makes two main original contributions: the authors have developed the model of the joint replenishment problem, and they have contributed to the problem-solving process by using a new meta-heuristic and comparing it to a classic one.

Details

Journal of Modelling in Management, vol. 13 no. 1
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 29 April 2014

S.T.A. Niaki and Majid Khedmati

Abstract

Purpose

The purpose of this paper is to propose two control charts to monitor multi-attribute processes and then a maximum likelihood estimator for the change point of the parameter vector (process fraction non-conforming) of multivariate binomial processes.
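
To make the change point idea concrete, the following sketch estimates the change point of a univariate binomial sequence by maximizing the split log-likelihood over all candidate change times. This is a simplification of the paper's multivariate setting, and the trial counts and fractions non-conforming below are invented:

```python
import numpy as np

def binomial_change_point(x, n):
    """Maximum likelihood estimate of the change point tau in a
    sequence of Binomial(n, p) counts whose fraction non-conforming
    shifts from p0 to p1 at some unknown time tau.
    (Univariate simplification of the multivariate case.)"""
    x = np.asarray(x, dtype=float)
    T = len(x)
    best_tau, best_ll = None, -np.inf
    for tau in range(1, T):                # split: [0, tau), [tau, T)
        p0 = x[:tau].sum() / (tau * n)
        p1 = x[tau:].sum() / ((T - tau) * n)
        ll = 0.0
        for p, seg in ((p0, x[:tau]), (p1, x[tau:])):
            p = min(max(p, 1e-12), 1 - 1e-12)   # guard log(0)
            ll += seg.sum() * np.log(p) + (seg.size * n - seg.sum()) * np.log(1 - p)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

rng = np.random.default_rng(2)
x = np.concatenate([rng.binomial(50, 0.05, 100),   # in control, p = 0.05
                    rng.binomial(50, 0.15, 40)])   # shifted, p = 0.15
print(binomial_change_point(x, 50))  # estimate near the true change at 100
```

The binomial coefficient terms cancel across candidate splits, so only the success/failure log terms matter when comparing values of tau.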

Design/methodology/approach

The performance of the proposed estimator is evaluated for both control charts using some simulation experiments. At the end, the applicability of the proposed method is illustrated using a real case.

Findings

The proposed estimator provides accurate and useful estimates of the change point for almost all shift magnitudes, regardless of the process dimension. Moreover, based on the results obtained, the estimator is robust to different correlation values.

Originality/value

To the best of the authors' knowledge, no work is available in the literature to estimate the change point of multivariate binomial processes.

Article
Publication date: 3 November 2022

Reza Edris Abadi, Mohammad Javad Ershadi and Seyed Taghi Akhavan Niaki

Abstract

Purpose

The overall goal of the data mining process is to extract information from an extensive data set and make it understandable for further use. When working with large volumes of unstructured data in research information systems, it is necessary to examine the data's quality and divide the information into logical groupings before attempting to analyze it. On the other hand, data quality results are valuable resources for defining quality excellence programs of any information system. Hence, the purpose of this study is to discover and extract knowledge to evaluate and improve data quality in research information systems.

Design/methodology/approach

Clustering in data analysis, and exploiting its outputs, allows practitioners to gain an in-depth and extensive look at their information and to form logical structures based on what they have found. In this study, data extracted from an information system are used in the first stage. Then, the data quality results are classified into an organized structure based on data quality dimension standards. Next, partitioning clustering (K-Means), density-based clustering (density-based spatial clustering of applications with noise [DBSCAN]) and hierarchical clustering (balanced iterative reducing and clustering using hierarchies [BIRCH]) are applied to compare and find the most appropriate clustering algorithm for the research information system.
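
A minimal sketch of this three-way comparison can be written with scikit-learn, using synthetic data in place of the (non-public) research information system records; the cluster counts and DBSCAN parameters below are illustrative guesses:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN, Birch
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the quality-score records
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.6, random_state=0)

models = {
    "KMeans": KMeans(n_clusters=4, n_init=10, random_state=0),
    "DBSCAN": DBSCAN(eps=0.7, min_samples=5),
    "BIRCH": Birch(n_clusters=4),
}
scores = {}
for name, model in models.items():
    labels = model.fit_predict(X)
    if len(set(labels)) > 1:            # silhouette needs >= 2 clusters
        scores[name] = silhouette_score(X, labels)

best = max(scores, key=scores.get)      # highest silhouette wins
print(scores, best)
```

The silhouette coefficient is the selection criterion the paper reports: the algorithm with the highest value (BIRCH on the authors' data) is judged the best fit for the records at hand.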

Findings

This paper showed that quality control results of an information system could be categorized through well-known data quality dimensions, including precision, accuracy, completeness, consistency, reputation and timeliness. Furthermore, among different well-known clustering approaches, the BIRCH algorithm of hierarchical clustering methods performs better in data clustering and gives the highest silhouette coefficient value. Next in line is the DBSCAN method, which performs better than the K-Means method.

Research limitations/implications

In the data quality assessment process, the discrepancies identified and the lack of a proper classification for inconsistent data have led to unstructured reports, making the statistical analysis of qualitative metadata problems difficult and the observed errors impossible to root out. Therefore, in this study, the data quality evaluation results have been categorized into various data quality dimensions, based on which multiple analyses have been performed using data mining methods.

Originality/value

Although several pieces of research have been conducted to assess data quality results of research information systems, knowledge extraction from obtained data quality scores is a crucial work that has rarely been studied in the literature. Besides, clustering in data quality analysis and exploiting the outputs allows practitioners to gain an in-depth and extensive look at their information to form some logical structures based on what they have found.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 17 March 2020

M.M. Ershadi, M.J. Ershadi and S.T.A. Niaki

Abstract

Purpose

Healthcare failure mode and effect analysis (HFMEA) identifies potential risks and defines preventive actions to reduce the effects of risks. In addition, a discrete event simulation (DES) can evaluate the effects of every improvement scenario. Consequently, an integrated HFMEA-DES model is proposed for quality improvement in a general hospital.

Design/methodology/approach

In the proposed model, HFMEA is implemented first. Since every risk in the hospital is important and there are many departments with different associated risks, all defined risk factors are evaluated using the risk priority number (RPN), and related corrective actions are defined based on experts' knowledge. Then, a DES model is designed to determine the effects of the selected actions before implementation.
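
The RPN step can be illustrated in a few lines: each failure mode is scored for severity, occurrence and detection (conventionally on 1-10 scales), and the product ranks the risks. The failure modes and ratings below are hypothetical, not taken from the paper's case study:

```python
# Hypothetical failure modes; RPN = severity * occurrence * detection,
# with each factor rated 1-10 as in conventional FMEA practice.
failure_modes = [
    ("medication dosing error", 9, 4, 3),
    ("patient record mix-up",   8, 2, 5),
    ("equipment downtime",      6, 5, 2),
]
ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes),
                reverse=True)
for rpn, name in ranked:
    print(rpn, name)   # corrective actions target the highest RPNs first
```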

Findings

Results show that the proposed model not only supports the different steps of HFMEA but also closely matches the real priorities of the risk factors. It predicts the effects of corrective actions before implementation and helps hospital managers improve performance.

Practical implications

This research is based on a case study in a well-known general hospital in Iran.

Originality/value

This study takes advantage of an integrated HFMEA-DES model to address the limitations of HFMEA in a general hospital with a large number of beds and patients. The case study proves the effectiveness of the proposed approach for improving the performance of the hospital's resources.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 1
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 4 July 2023

Karim Atashgar and Mahnaz Boush

Abstract

Purpose

When a process experiences an out-of-control condition, identifying the change point can lead practitioners to an effective root cause analysis. The change point is the time at which a special cause(s) manifests itself in the process. In statistical process monitoring, when the chart signals an out-of-control condition, change point analysis is an important step in the root cause analysis of the process. This paper proposes an artificial neural network model to identify the change point of a multistage process with the cascade property, in the case that the process is properly modeled by a simple linear profile.

Design/methodology/approach

In practice, many processes can be modeled by a functional relationship rather than a single random variable or a random vector. This modeling approach is referred to as a profile in the statistical process control literature. In this paper, two models based on multilayer perceptron (MLP) and convolutional neural network (CNN) approaches are proposed for identifying the change point of the profile of a multistage process.

Findings

The capabilities of the proposed models are evaluated and compared using several numerical scenarios. The numerical analysis indicates that both proposed models can identify the change point effectively in different scenarios. The comparative sensitivity analysis shows that the capability of the proposed convolutional network is superior to that of the MLP network.

Originality/value

To the best of the authors' knowledge, this is the first time that: (1) A model is proposed to identify the change point of the profile of a multistage process. (2) A convolutional neural network is modeled for identifying the change point of an out-of-control condition.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 28 December 2020

Iman Bahrami, Roya M. Ahari and Milad Asadpour

Abstract

Purpose

In emergency services, maximizing population coverage at the lowest cost during peak demand is important. In addition, owing to the nature of services in emergency centers, including hospitals, the number of servers and beds effectively constitutes the capacity of the system. Hence, the purpose of this paper is to propose a multi-objective maximal covering facility location model for emergency service centers within an M(t)/M/m/m queuing system, considering different levels of service and a periodic demand rate.
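
For a constant arrival rate, the M/M/m/m loss system underlying this model has a closed-form blocking probability, the Erlang B formula, computable by a stable recursion; the paper's time-varying M(t) arrivals would require re-evaluating it with a rate that changes by period. A minimal sketch:

```python
def erlang_b(m, a):
    """Blocking probability in an M/M/m/m (Erlang loss) system with
    m servers/beds and offered load a = lambda / mu, via the stable
    recursion B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

# 2 beds and an offered load of 2 erlangs -> 40% of arrivals blocked
print(erlang_b(2, 2.0))  # 0.4
```

Blocked arrivals here correspond to patients who cannot be admitted, which is why the number of beds doubles as the system capacity in the model.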

Design/methodology/approach

The process of serving patients is modeled according to queuing theory and mathematical programming. To cope with the multi-objectiveness of the proposed model, an augmented ε-constraint method has been used within the GAMS software. Since the computational time grows exponentially with the problem size, the GAMS software cannot solve large-scale problems. Thus, an NSGA-II algorithm has been proposed to solve this category of problems, and its results have been compared with those of GAMS through randomly generated sample problems. In addition, the applicability of the proposed model in real situations has been examined through a case study in Iran.

Findings

Results obtained from the randomly generated sample problems illustrate that while GAMS and NSGA-II provide solutions of almost the same quality, the CPU execution time of the proposed NSGA-II algorithm is significantly lower than that of GAMS. Furthermore, the results of solving the model for the case study confirm that the model can determine the locations of the required facilities and allocate demand areas to them appropriately.

Originality/value

In most previous works on emergency services, maximal coverage at minimum cost was the main objective, and minimizing the number of patients waiting to receive services seems to have been neglected. To the best of the authors' knowledge, this is the first time that a maximal covering problem is formulated within an M(t)/M/m/m queuing system. This novel formulation leads to greater satisfaction for injured people by minimizing the average number of injured people waiting in the queue for services.

Details

Journal of Modelling in Management, vol. 16 no. 3
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 17 March 2023

Le Wang, Liping Zou and Ji Wu

Abstract

Purpose

This paper aims to use artificial neural network (ANN) methods to predict stock price crashes in the Chinese equity market.

Design/methodology/approach

Three ANN models are developed and compared with the logistic regression model.

Findings

Results from this study conclude that the ANN approaches outperform the traditional logistic regression model, with ANNs having fewer hidden layers performing better than those with multiple hidden layers. The ANN approach also reveals that foreign institutional ownership, financial leverage, weekly average return and market-to-book ratio are important variables for predicting stock price crashes, consistent with the results from the traditional logistic model.

Originality/value

First, the ANN framework has been used in this study to forecast stock price crashes and compared to the traditional logistic model in China, the world's largest emerging market. Second, receiver operating characteristic (ROC) curves and the area under the ROC curve have been used to evaluate the forecasting performance of the ANNs against the traditional approaches, in addition to some traditional performance evaluation methods.
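
The AUC criterion mentioned here has a simple rank-statistic form: it equals the probability that a randomly chosen crash case receives a higher score than a randomly chosen non-crash case. A minimal NumPy sketch with toy labels and scores (not the paper's data):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, with ties counted as 1/2."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# 3 of the 4 positive-negative pairs are ranked correctly
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why it is a natural yardstick for comparing the ANN and logistic models.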

Details

Pacific Accounting Review, vol. 35 no. 4
Type: Research Article
ISSN: 0114-0582
